8 research outputs found

    Pigment epithelial detachment composition indices (PEDCI) in neovascular age-related macular degeneration

    We provide an automated analysis of pigment epithelial detachments (PEDs) in neovascular age-related macular degeneration (nAMD) and estimate the areas of serous, neovascular, and fibrous tissue within PEDs. A retrospective analysis of high-definition spectral-domain OCT B-scans from 43 eyes of 37 patients with nAMD and fibrovascular PED was performed. PEDs were manually segmented and then filtered using 2D kernels to classify pixels within the PED as serous, neovascular, or fibrous. A set of PED composition indices was calculated on a per-image basis from the relative PED area of serous (PEDCI-S), neovascular (PEDCI-N), and fibrous (PEDCI-F) tissue. Accuracy of segmentation and classification within the PED was graded in a masked fashion. Mean overall intra-observer repeatability and inter-observer reproducibility were 0.86 ± 0.07 and 0.86 ± 0.03, respectively, using intraclass correlations. The mean graded scores were 96.99 ± 8.18, 92.12 ± 7.97, 91.48 ± 8.93, and 92.29 ± 8.97 for segmentation, serous, neovascular, and fibrous tissue, respectively. Mean (range) PEDCI-S, PEDCI-N, and PEDCI-F were 0.253 (0-0.952), 0.554 (0-1), and 0.193 (0-0.693). A kernel-based image processing approach demonstrates potential for approximating PED composition. Evaluating follow-up changes during nAMD treatment with respect to PEDCI would be useful for further clinical applications.
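    To make the index computation concrete, below is a minimal sketch of how per-image composition indices could be derived once PED pixels have been classified; the label convention, box-kernel smoothing, and reflectivity thresholds are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Assumed label convention inside the segmented PED:
SEROUS, NEOVASCULAR, FIBROUS = 1, 2, 3

def classify_ped_pixels(bscan, ped_mask, kernel_size=7, low_thr=0.2, high_thr=0.6):
    """Toy kernel-based classification: smooth reflectivity (B-scan assumed
    normalized to [0, 1]) with a 2D box kernel and threshold it into three
    tissue classes. Kernel size and thresholds are illustrative assumptions,
    not values from the paper."""
    smoothed = uniform_filter(bscan.astype(float), size=kernel_size)
    labels = np.zeros(ped_mask.shape, dtype=np.uint8)
    inside = ped_mask.astype(bool)
    labels[inside & (smoothed < low_thr)] = SEROUS      # hyporeflective
    labels[inside & (smoothed >= high_thr)] = FIBROUS   # hyperreflective
    labels[inside & (labels == 0)] = NEOVASCULAR        # in between
    return labels

def composition_indices(labels):
    """Per-image composition indices: relative area of each tissue class
    within the segmented PED."""
    ped_area = np.count_nonzero(labels)
    if ped_area == 0:
        return {"PEDCI-S": 0.0, "PEDCI-N": 0.0, "PEDCI-F": 0.0}
    return {
        "PEDCI-S": np.count_nonzero(labels == SEROUS) / ped_area,
        "PEDCI-N": np.count_nonzero(labels == NEOVASCULAR) / ped_area,
        "PEDCI-F": np.count_nonzero(labels == FIBROUS) / ped_area,
    }
```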

    Automatic Identification of Mixed Retinal Cells in Time-Lapse Fluorescent Microscopy Images using High-Dimensional DBSCAN

    Despite providing high spatial resolution, functional imaging remains largely unsuitable for high-throughput experiments because current practice requires cells to be identified manually in a time-consuming procedure. Against this backdrop, we seek to integrate such a high-resolution technique into a high-throughput workflow by automating the process of cell identification. As a step forward, we attempt to identify mixed retinal cells in time-lapse fluorescent microscopy images. Unfortunately, usual 2D image segmentation, as well as other existing methods, does not adequately distinguish between the time courses of different spatial locations. The task is further complicated by the inherent heterogeneity of cell morphology. To overcome this challenge, we propose a high-dimensional (HiD) version of the DBSCAN (density-based spatial clustering of applications with noise) algorithm, in which differences between such time courses are appropriately accounted for. Significantly, the output of the proposed method matches manually identified cells with over 80% accuracy, marking more than a 50% improvement over a reference 2D method.
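    A minimal sketch of the clustering idea described above, assuming each pixel's spatial position and intensity time course are concatenated into one high-dimensional feature vector before applying DBSCAN; the eps, min_samples, and spatial weighting values are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def cluster_cells(stack, eps=1.5, min_samples=20, spatial_weight=2.0):
    """Cluster pixels of a time-lapse stack (T, H, W) into putative cells.

    Each pixel becomes one sample whose feature vector is its (x, y)
    position concatenated with its standardized intensity time course,
    so pixels are grouped only when they are both spatially close and
    temporally similar."""
    T, H, W = stack.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Normalize coordinates so spatial and temporal terms are comparable.
    coords = np.column_stack([xs.ravel(), ys.ravel()]) / max(H, W)
    traces = StandardScaler().fit_transform(stack.reshape(T, -1).T)  # (H*W, T)
    features = np.hstack([spatial_weight * coords, traces])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
    return labels.reshape(H, W)   # -1 marks noise pixels, >=0 are cell IDs
```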

    Improved Fundus Image Quality Assessment: Augmenting Traditional Features with Structure Preserving ScatNet Features in Multicolor Space

    High-quality fundus photographs (FPs) are essential for clinicians to make accurate diagnoses of various ophthalmic diseases, including diabetic retinopathy, age-related macular degeneration, and glaucoma. It is therefore imperative that clinicians be presented with FPs whose high diagnostic quality is assured. In this context, significant effort has been directed at developing automated tools that distinguish between high-quality and low-quality FPs. For this purpose, features suited to natural image quality assessment have traditionally been employed even for the diagnostic quality assessment of FPs. However, structure-preserving features generated by a deep scattering network (ScatNet) were recently reported to outperform the aforementioned traditional features. In this paper, we demonstrate a further improvement in performance by combining the traditional features with ScatNet features. Importantly, an additional improvement is observed when the ScatNet features are computed in a multicolor space.
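    The sketch below illustrates the feature-combination idea under stated assumptions: a few classical quality cues are concatenated with ScatNet-style features computed per channel of a multicolor space, then fed to an SVM. The Gabor-based stand-in for scattering coefficients, the LAB color space, and the classifier settings are not taken from the paper.

```python
import numpy as np
import cv2
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def traditional_quality_features(gray):
    """Stand-ins for classical no-reference quality cues: brightness,
    global contrast, and sharpness (variance of the Laplacian)."""
    return np.array([gray.mean(), gray.std(),
                     cv2.Laplacian(gray, cv2.CV_64F).var()])

def scatnet_like_features(channel, scales=(4, 8), n_orient=4):
    """Crude stand-in for first-order scattering coefficients: spatially
    averaged moduli of Gabor responses at a few scales and orientations.
    A real pipeline would use a proper scattering-transform library."""
    channel = channel.astype(np.float64)
    feats = []
    for lambd in scales:
        for k in range(n_orient):
            kern = cv2.getGaborKernel((21, 21), sigma=lambd / 2.0,
                                      theta=np.pi * k / n_orient,
                                      lambd=lambd, gamma=0.5)
            feats.append(np.abs(cv2.filter2D(channel, cv2.CV_64F, kern)).mean())
    return np.array(feats)

def fundus_quality_vector(image_bgr):
    """Concatenate traditional features with per-channel ScatNet-style
    features computed in a multicolor space (LAB here, as an assumption)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float64)
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    scat = np.concatenate([scatnet_like_features(lab[..., c]) for c in range(3)])
    return np.concatenate([traditional_quality_features(gray), scat])

# X: stacked feature vectors for many FPs, y: 1 = high quality, 0 = low quality
quality_classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
```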

    A Step Towards Miniaturized Milk Adulteration Detection System: Smartphone-Based Accurate pH Sensing Using Electrospun Halochromic Nanofibers

    Development of an economical, miniaturized platform for monitoring the inherent biophysical properties of milk is imperative for tamper-proof milk adulteration detection. Towards this, herein we demonstrate the synthesis and evaluation of a paper-based, scalable pH sensor derived from electrospun halochromic nanofibers. The sensor manifests three unique color signatures corresponding to pure (6.6 ≤ pH ≤ 6.9), acidic (pH < 6.6), and alkaline (pH > 6.9) milk samples, enabling a colorimetric detection mechanism. In a practical prototype, color transitions on the sensor strips are captured using a smartphone camera and subsequently assigned to one of the three pH ranges using an image-based classifier. Specifically, we implemented three well-known machine learning algorithms and compared their classification performance. For a standard training-to-test ratio of 80:20, support vector machines achieved nearly perfect classification with an average accuracy of 99.71%.
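    A minimal sketch of the image-based classification step, assuming mean/std HSV color features over the strip region and an RBF SVM with the stated 80:20 split; the feature choice, ROI handling, and classifier settings are assumptions for illustration.

```python
import numpy as np
import cv2
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

CLASSES = ("acidic", "pure", "alkaline")   # the three pH ranges

def strip_color_features(image_bgr):
    """Mean and std of each HSV channel over the strip region. Using the
    whole image as the ROI is a simplifying assumption; a real prototype
    would first localize the sensor strip."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).reshape(-1, 3)
    return np.concatenate([hsv.mean(axis=0), hsv.std(axis=0)])

def train_strip_classifier(images, labels):
    """80:20 train/test split with an RBF SVM, mirroring the protocol
    described in the abstract (kernel and C are illustrative defaults)."""
    X = np.stack([strip_color_features(img) for img in images])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    return clf, accuracy_score(y_te, clf.predict(X_te))
```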